404 research outputs found

    The ethics of big data applications in the consumer sector

    Is cyberpeace possible?

    Constructing and validating a scale of inquisitive curiosity

    We advance the understanding of the philosophy and psychology of curiosity by operationalizing and constructing an empirical measure of Nietzsche’s conception of inquisitive curiosity, expressed by the German terms Wissbegier (“thirst for knowledge” or “need/impetus to know”) and Neugier (“curiosity” or “inquisitiveness”). First, we show that existing empirical measures of curiosity do not tap the construct of inquisitive curiosity, though they may tap related constructs such as idle curiosity and phenomenological curiosity. Next, we map the concept of inquisitive curiosity and connect it to related concepts, such as open-mindedness and intellectual humility. The bulk of the paper reports four studies: an Anglophone exploratory factor analysis, an Anglophone confirmatory factor analysis, an informant study, and a Germanophone exploratory and confirmatory factor analysis.
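
    For readers who want to see what the exploratory-factor-analysis step looks like in practice, here is a minimal sketch in Python; the data file, item columns, and two-factor setting are hypothetical, not the authors' actual analysis.

        # Minimal EFA sketch, assuming hypothetical Likert-scale responses in
        # "curiosity_items.csv" (one column per questionnaire item).
        import pandas as pd
        from factor_analyzer import FactorAnalyzer

        responses = pd.read_csv("curiosity_items.csv")

        # Oblique rotation, since curiosity facets are expected to correlate.
        fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
        fa.fit(responses)

        loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
        print(loadings.round(2))          # which items load on which factor
        print(fa.get_factor_variance())   # variance explained per factor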

    The future of FemTech ethics & privacy – a global perspective

    We discuss the concept of women’s empowerment in FemTech, considering cultural and legal differences, ethical concerns, and legal consequences. We claim that it is crucial to prioritize privacy, a fundamental right, especially in light of changes in laws related to women’s health, such as the overturning of Roe v. Wade in the US.

    Promoting the moral sensitivity of police and military personnel

    To make good decisions, people must be able to identify the ethical features of a situation, i.e., to notice when and how the welfare of others and ethical values are at stake. In the work of military and law enforcement officers, moral sensitivity is of special importance, due to an especially stressful working environment and the severe consequences that blindness to moral features may have for diverse parties. As we argue, morally sensitive people overcome three blinders that may lead others to ignore moral aspects in their decision making: cognitive overload, psychological biases, and moral disengagement. Based on these challenges, we suggest four general learning outcomes for the training of moral sensitivity: (1) an empathic concern for relevant groups, (2) an awareness of one’s vulnerability to biases and stress, (3) moral schemas for the evaluation of risky situations, and (4) a sensitivity to attitudes of moral disengagement. To achieve these learning outcomes in the ethics training of military and police personnel, we offer indicative training examples and references.

    Mapping collective behavior – beware of looping

    We discuss ambiguities of the two main dimensions of the map proposed by Bentley and colleagues that relate to the degree of self-reflection the observed agents have upon their behavior. This self-reflection is a variant of the "looping effect", which denotes that, in social research, the product of investigation influences the object of investigation. We outline how this can be understood as a dimension of "height" in the map of Bentley et al.

    Choosing how to discriminate: navigating ethical trade-offs in fair algorithmic design for the insurance sector

    Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting differ from those of using predictive algorithms in other sectors (e.g., medical diagnosis, sentencing). Here we focus on the trade-off between pursuing indirect non-discrimination and preserving predictive accuracy. The moral assessment of this trade-off depends on the context of application, that is, on the consequences of inaccurate risk predictions in the insurance domain.
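
    As a toy illustration of the two quantities in this trade-off, the following sketch trains a classifier on synthetic data and reports accuracy alongside a demographic-parity gap; the features, groups, and model are hypothetical and do not reproduce the paper's framework.

        # Sketch: accuracy vs. the demographic-parity gap (difference in
        # predicted "high-risk" rates between two groups). Synthetic data only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))            # hypothetical risk features
        group = rng.integers(0, 2, size=1000)     # protected attribute (0/1)
        y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

        clf = LogisticRegression().fit(np.column_stack([X, group]), y)
        pred = clf.predict(np.column_stack([X, group]))

        accuracy = (pred == y).mean()
        parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
        print(f"accuracy={accuracy:.3f}, parity_gap={parity_gap:.3f}")

    Dropping group from the model's inputs may shrink the gap at some cost in accuracy; which point on that trade-off is morally acceptable is the question the paper's context-sensitive framework addresses.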

    The (Limited) Space for Justice in Social Animals

    While differentialists deny that non-linguistic animals can have a sense of justice, assimilationists credit some animals with such an advanced moral attitude. We approach this debate from a philosophical perspective. First, we outline the history of the notion of justice in philosophy and how various facets of that notion play a role in contemporary empirical investigations of justice among humans. On this basis, we develop a scheme for the elements of justice-relevant situations and for criteria of justice that should be fruitful in studying both humans and animals. Furthermore, we investigate the conceptual connections between a sense of justice, on the one hand, and various other mental powers, on the other, and indicate which of the latter may be beyond the ken of animals. Next, we consider recent empirical research on justice-related phenomena in animals. We argue for an intermediate position: while animals can at least in principle satisfy some preconditions of justice (intentional action, rule-following), others are problematic, notably possessing a notion of desert. A space for justice in social animals exists, yet it is rather limited compared to the rich cultures of justice in humans. Finally, we reflect on some actual or alleged implications of research on animal justice. As regards justice in humans, one should avoid a simplistic image of "natural justice" as boiling down to equal allocation of goods. As regards justice for animals, one should be wary of the contractualist assumption that only those capable of justice themselves are deserving of "just" treatment.

    The use of artificial intelligence applications in medicine and the standard required for healthcare provider-patient briefings—an exploratory study

    Introduction: Digital Health Technologies (DHTs) are currently being funneled through legacy regulatory processes that are not adapted to the unique particularities of this new technology class. In the absence of adequate regulation of DHTs, the briefing of a patient by their healthcare provider (HCP), as a component of informed consent, can be the last line of defense before potentially harmful technologies are used on a patient.
    Methods: This exploratory study uses a case vignette of a machine-learning-based technology for the diagnosis of ischemic heart disease, presented to a group of medical students, physicians, and bioethicists. What constitutes the necessary standard and content of HCP–patient briefings is explored using a survey (N = 34). Whether participants actually provide a sufficient HCP–patient briefing is evaluated based on audio recordings.
    Results and Conclusions: We find that participants deem that the use of artificial intelligence in a medical context should be disclosed to patients, and argue that the explanation should currently follow the standard required of other experimental procedures. Further, since our study indicates that the implementation of HCP–patient briefings lags behind the identified standard, opportunities for incorporating training on the use of DHTs into medical curricula and continuing training schedules should be considered.

    From OECD to India: Exploring cross-cultural differences in perceived trust, responsibility and reliance of AI and human experts

    AI is becoming involved in tasks formerly assigned exclusively to humans. Most research on perceptions and social acceptability of AI in these areas is restricted to the Western world. In this study, we compare trust, perceived responsibility, and reliance on AI and human experts across an OECD and an Indian sample. We find that OECD participants consider humans to be less capable but more morally trustworthy and more responsible than AI. In contrast, Indian participants trust humans more than AI but assign equal responsibility to both types of experts. We discuss the implications of the observed differences for algorithmic ethics and human-computer interaction.
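
    A minimal sketch of the kind of between-sample comparison reported here, with hypothetical ratings (the study's actual measures and statistical tests may differ):

        # Compare mean trust ratings between two samples with an
        # independent-samples t-test. All numbers below are synthetic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        trust_oecd = rng.normal(4.2, 1.0, size=200)   # hypothetical 1-7 ratings
        trust_india = rng.normal(4.8, 1.0, size=200)

        t, p = stats.ttest_ind(trust_oecd, trust_india)
        print(f"t = {t:.2f}, p = {p:.4f}")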
    • …